131 research outputs found

    A characteristic polynomial


    Development of ship financing : a study of the 2008 financial crisis

    This thesis examines the periods before and after the financial crisis of 2008 in order to identify potential shifts in ship financing. We define the pre-crisis period as the start of 2005 until the end of August 2008, and the post-crisis period as September 2008 until the end of 2012. In our empirical analysis we use inferential statistics to test our predictions. The data were gathered from two world-renowned shipping information providers, Clarksons and Marine Money. By pooling and then segmenting the provided data, we created our own database, tailored to our research questions. Our analysis shows that there has indeed been a shift from the traditional financing source of bank loans towards corporate bonds. By the end of 2012, bond issuance accounted for almost 45% of ship financing, up 40 percentage points from the start of the sample. This shift also involved a change in the location of funding: Asia and Scandinavia provided a significantly greater number of debt issuances in the aftermath of the financial crisis, while North America, Europe, and the Middle East saw their shares of funding deteriorate. In addition, the use of public equity markets as a means of financing declined greatly, resulting in a greater reliance on debt in the post period. Given the increased importance of bonds, we also examined this instrument in more detail. Our findings show that bondholders demand higher returns and are less willing to engage in long-term commitments in the post period, as a result of greater market uncertainty. This uncertainty has also caused banks to alter their lending practices, with a greater focus on risk mitigation. Overall, our analysis points to a severe change in ship financing over the last eight years. Looking ahead, we believe the ship financing picture has changed permanently, though less radically than what we observed in our sample: we expect bonds to take a larger share of ship financing, but bank loans to remain the primary source of capital.
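
    The thesis's empirical approach rests on inferential statistics over pooled deal data. As a hedged illustration only (not the authors' code; all counts below are invented), a two-proportion z-test of the kind such an analysis might use could compare the share of debt raised via bonds before and after September 2008:

```python
# Illustrative sketch: two-proportion z-test comparing the bond share of
# debt issuances pre vs. post crisis. All deal counts are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Test H0: p1 == p2 for two independent binomial samples."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))           # two-sided p-value

# Hypothetical counts: bond deals out of all debt issuances, pre vs. post.
z, p = two_proportion_ztest(x1=30, n1=600, x2=270, n2=600)
print(f"z = {z:.2f}, p = {p:.4f}")
```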

    A survey of machine learning-based methods for COVID-19 medical image analysis

    The ongoing COVID-19 pandemic caused by the SARS-CoV-2 virus has already resulted in 6.6 million deaths, with more than 637 million people infected, only 30 months after the first occurrences of the disease in December 2019. Rapid and accurate detection and diagnosis of the disease are therefore a first priority all over the world. Researchers have been working on various detection methods, and since the disease infects the lungs, lung image analysis has become a popular research area for detecting its presence. Medical images from chest X-rays (CXR), computed tomography (CT), and lung ultrasound have been used by automated image analysis systems in artificial intelligence (AI)- and machine learning (ML)-based approaches. Various existing and novel ML, deep learning (DL), transfer learning (TL), and hybrid models have been applied to detecting and classifying COVID-19, segmenting infected regions, assessing severity, and tracking patient progress from medical images of COVID-19 patients. In this paper, we provide a comprehensive review of recent approaches to COVID-19 image analysis, surveying the contributions of existing research efforts, the available image datasets, and the performance metrics used in recent works. The challenges and future research directions for advancing the fight against COVID-19 from the AI perspective are also discussed. The main objective of this paper is to summarize the research done on COVID-19 detection and analysis from medical image datasets using ML, DL, and TL models, analyzing their novelty and efficiency, while also pointing to other COVID-19 reviews and surveys in order to give as complete an overview of existing COVID-19 research as possible.
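
    As a minimal sketch of the transfer-learning (TL) approach this survey covers (an assumed setup, not any specific surveyed model): fine-tune an ImageNet-pretrained ResNet-18 for three-way CXR classification, where the class count and tensor shapes are chosen purely for illustration.

```python
# Hedged TL sketch: freeze a pretrained backbone, train a new 3-class head
# (e.g., COVID-19 / pneumonia / normal -- an assumed label set).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                   # freeze pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)  # new head for 3 classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of CXR tensors (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

images = torch.randn(4, 3, 224, 224)   # dummy batch standing in for CXR data
labels = torch.randint(0, 3, (4,))
print(f"batch loss: {train_step(images, labels):.3f}")
```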

    Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust

    Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task; achievements here may also inform the containment of future pandemics. Researchers from various fields have proposed novel models and systems to identify Covid-19 patients from different medical and non-medical data. AI researchers have contributed mostly through automated systems built on convolutional neural networks (CNN) and deep neural networks (DNN) for Covid-19 detection and diagnosis. Because deep learning (DL) and transfer learning (TL) models are efficient at classification and segmentation tasks, most recent AI-based studies propose DL and TL models for Covid-19 detection and infected-region segmentation from chest medical images such as X-rays or CT scans. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated, and tested using CT images of Covid patients collected from two different sources. The web application provides a simple, user-friendly interface for processing CT images from various sources with the chosen models, thresholds, and other parameters to generate detection and segmentation decisions. The models achieve strong results on Dice similarity, Jaccard similarity, accuracy, loss, and precision; the U-Net model outperformed the others with more than 98% accuracy.
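
    The Dice and Jaccard scores the framework reports are standard overlap metrics between a predicted and a ground-truth segmentation mask. A small self-contained sketch (not the framework's code) of how they are computed:

```python
# Dice and Jaccard similarity for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def jaccard(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

pred  = np.array([[0, 1, 1], [0, 1, 0]], dtype=bool)   # toy masks
truth = np.array([[0, 1, 0], [0, 1, 1]], dtype=bool)
print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```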

    Genomic biomarker discovery in disease progression and therapy response in bladder cancer utilizing machine learning

    Cancer in all its forms of expression is a major cause of death. Identifying the genomic basis of cancer requires the discovery of biomarkers. In this paper, genomic data from bladder cancer are examined for the purpose of biomarker discovery. Genomic biomarkers are indicators derived from studying the genome, either at a very low level based on the genome sequence itself, or more abstractly, for example by measuring levels of gene expression across disease groups. The latter approach is pivotal for this work, since the available datasets consist of RNA sequencing data transformed to gene expression levels, along with data on a multitude of clinical indicators. Building on this, various methods are applied in the experiments leading to biomarker discovery: statistical modeling via logistic regression with elastic-net regularization, clustering, survival analysis through Kaplan–Meier curves, and heatmaps. These experiments yielded two gene signatures capable of predicting therapy response and disease progression with considerable accuracy for bladder cancer patients; the signatures correlate well with clinical indicators such as therapy response and T-stage at surgery, and with disease progression in a time-to-event setting.
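
    As a hedged sketch of the elastic-net logistic regression step (synthetic data and assumed hyperparameters; not the authors' pipeline), the L1 component of the penalty is what produces a sparse gene signature:

```python
# Elastic-net logistic regression on toy gene-expression data: the L1 part
# zeroes out most coefficients, leaving a sparse candidate "signature".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # 200 patients x 500 genes (synthetic)
y = (X[:, :5].sum(axis=1) + rng.normal(size=200)) > 0  # signal in 5 genes

clf = LogisticRegression(
    penalty="elasticnet", solver="saga",  # saga supports the elastic-net penalty
    l1_ratio=0.5, C=0.5, max_iter=5000,
).fit(X, y)

signature = np.flatnonzero(clf.coef_[0])  # genes kept by the L1 component
print(f"{signature.size} genes selected; first few: {signature[:10]}")
```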

    Serrate RNA effector molecule (SRRT) is associated with prostate cancer progression and is a predictor of poor prognosis in lethal prostate cancer

    Arsenite-resistance protein 2, also known as serrate RNA effector molecule (ARS2/SRRT), is known to be involved in cellular proliferation and tumorigenicity. However, its role in prostate cancer (PCa) has not yet been established. We investigated the potential role of SRRT in 496 prostate samples, including benign, incidental, advanced, and castrate-resistant cases treated by androgen deprivation therapy (ADT). We also explored the association of SRRT with common genetic aberrations in lethal PCa using immunohistochemistry (IHC), and performed a detailed analysis of SRRT expression in The Cancer Genome Atlas (TCGA PRAD) using RNA-seq data and clinical information (pathological T category and pathological Gleason score). Our findings indicated that high SRRT expression was significantly associated with poor overall survival (OS) and cause-specific survival (CSS). SRRT expression was also significantly associated with common genomic aberrations in lethal PCa such as PTEN loss, ERG gain, and mutant TP53 or ATM. Furthermore, TCGA PRAD data revealed that high SRRT mRNA expression was significantly associated with higher Gleason scores, PSA levels, and pathological T categories. Gene set enrichment analysis (GSEA) of RNA-seq data from the TCGA PRAD cohort indicated that SRRT may play a role in regulating the expression of genes involved in prostate cancer aggressiveness. Conclusion: these data identify a potential role for SRRT as a prognostic marker in lethal PCa; further research is required to investigate its potential as a therapeutic target. Funding: Prostate Cancer Foundation Young Investigator Award; Prostate Cancer Canada; Canadian Cancer Society (CCS).
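
    The OS/CSS comparisons reported here are the domain of Kaplan–Meier estimation. A minimal sketch using the `lifelines` package (synthetic survival times and an assumed high/low SRRT split; not the study's code) of a Kaplan–Meier fit plus a log-rank test between the two groups:

```python
# Kaplan-Meier fit and log-rank test for a high- vs. low-expression split.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
high = rng.exponential(scale=40, size=100)  # months to event, "SRRT high" (toy)
low  = rng.exponential(scale=70, size=100)  # "SRRT low" tends to survive longer
e_high = np.ones_like(high)                 # 1 = event observed (no censoring here)
e_low  = np.ones_like(low)

km = KaplanMeierFitter()
km.fit(high, event_observed=e_high, label="SRRT high")
print(f"median OS (high): {km.median_survival_time_:.1f} months")

res = logrank_test(high, low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.4g}")
```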

    Drawing Outside the Lines: Tracking-based Gesture Interaction in Mobile Augmented Entertainment

    We present a proof-of-concept study of tracking-based gesture interaction in an augmented reality setting using tablets. By tracking a pen in front of a tablet with its integrated camera, we map certain motions to gestures, which in turn are used to interact with the application. A comparative user study investigates the feasibility and usefulness of our approach with a simple augmented reality board game that allows translation and drawing gestures to move and create virtual board pieces, respectively. In particular, we examine whether and how well users can handle the interaction, and whether and why they enjoy it. The results from the 25 participants in our experiment provide both subjective and objective evidence of the potential of tracking-based gesture interaction for augmented reality gaming.
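
    As a purely hypothetical sketch of the core idea (the abstract does not describe the actual algorithm; the features and thresholds below are invented), one simple way to separate a translation gesture from a drawing gesture is to compare a tracked pen stroke's net displacement to its total path length:

```python
# Hypothetical gesture classification from a tracked pen trajectory.
import numpy as np

def classify_gesture(points: np.ndarray) -> str:
    """points: (N, 2) array of tracked pen positions in screen coordinates."""
    deltas = np.diff(points, axis=0)
    path_len = np.linalg.norm(deltas, axis=1).sum()     # total distance moved
    net_disp = np.linalg.norm(points[-1] - points[0])   # start-to-end distance
    if path_len < 5.0:                                  # invented dead zone
        return "idle"
    # A straight-ish stroke (net displacement close to path length) reads as
    # a translation; a meandering stroke reads as free-form drawing.
    return "translate" if net_disp / path_len > 0.8 else "draw"

stroke = np.array([[0, 0], [10, 1], [20, 0], [30, 2]], dtype=float)
print(classify_gesture(stroke))   # -> "translate"
```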

    MCNN-LSTM: Combining CNN and LSTM to classify multi-class text in imbalanced news data

    Searching, retrieving, and arranging text in ever-larger document collections necessitates more efficient information processing algorithms. Document categorization is a crucial component of various supervised-learning information processing systems. As the quantity of documents grows, the performance of classic supervised classifiers has deteriorated because of the number of document categories. Assigning documents to a predetermined set of classes is called text classification, and it is used extensively in a wide range of data-intensive applications. However, real-world implementations of these models are plagued with shortcomings that call for more investigation; in particular, imbalanced datasets hinder the most prevalent high-performance algorithms. In this paper, we propose an approach named multi-class Convolutional Neural Network (MCNN)-Long Short-Term Memory (LSTM), which combines two deep learning techniques, a Convolutional Neural Network (CNN) and a Long Short-Term Memory network, for text classification on news data. The CNNs act as feature extractors for the LSTM on the text input, capturing the spatial structure of words in a sentence, paragraph, or document. Because the dataset is imbalanced, we use the Tomek-Link algorithm to balance it before applying our model, which shows better performance than existing works in terms of F1-score (98%) and accuracy (99.71%). The combination of deep learning techniques used in our approach is well suited to classifying imbalanced datasets with underrepresented categories; hence our method outperformed other machine learning algorithms in text classification by a large margin. We also compare our results with traditional machine learning algorithms on both imbalanced and balanced datasets.
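
    A minimal sketch of the CNN+LSTM idea with assumed hyperparameters (not the paper's exact architecture): a Conv1D layer extracts local n-gram features from embedded tokens, an LSTM models their order, and Tomek-link undersampling (via `imbalanced-learn`) removes ambiguous majority-class points before training:

```python
# Hedged CNN+LSTM sketch for multi-class text classification on toy data.
import numpy as np
from imblearn.under_sampling import TomekLinks
from tensorflow import keras
from tensorflow.keras import layers

vocab, seq_len, n_classes = 20000, 200, 5   # assumed sizes

model = keras.Sequential([
    layers.Embedding(vocab, 128),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # n-gram features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                      # sequence modeling
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Remove Tomek links: majority-class points that form nearest-neighbor
# pairs with points of another class.
X = np.random.randint(0, vocab, size=(1000, seq_len))     # toy token ids
y = np.random.randint(0, n_classes, size=1000)
X_bal, y_bal = TomekLinks().fit_resample(X, y)
model.fit(X_bal, y_bal, epochs=1, batch_size=32)
```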

    INTERVAL ANALYSIS AND COMPLEX CENTERED FORMS

    In this paper we first discuss, briefly, the problem of approximating the real numbers with floating-point computer-representable numbers (a continuous space approximated by a discrete space). This approximation leads to uncertainties in numerical calculations. Interval analysis is a tool for estimating and controlling the errors of numerical calculations in such a discrete space; we therefore introduce it and give some basic properties. The merits and demerits of interval analysis are then discussed in some detail. As examples of interval analysis tools and algorithms, the natural extension idea as well as Newton's method in one dimension are discussed. The computation of inclusions for the range of functions is furthermore discussed, placing particular emphasis on centered forms. We then turn to the definition of a complex interval arithmetic as well as natural extensions in this arithmetic. Here we present a number of results for polynomials and rational functions, showing in particular that centered circular complex forms have some nice properties (explicit formulas, convergence, comparisons). Some numerical results are also given. A final brief discussion addresses the problem of subdividing a circle for the purpose of obtaining improved inclusions.
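
    As a small sketch of interval arithmetic and the natural-extension idea discussed here (directed rounding is omitted, so this toy version is not a rigorous enclosure on floating-point hardware):

```python
# Toy interval arithmetic: replacing real operations with interval operations
# yields the "natural extension", a guaranteed but often pessimistic enclosure.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x):                      # natural extension of f(t) = t*t - t
    return x * x - x

# Over [0, 1] the true range of t^2 - t is [-0.25, 0]; the natural extension
# gives the wider enclosure [-1, 1]: correct, but pessimistic (dependency).
print(f(Interval(0.0, 1.0)))   # -> [-1.0, 1.0]
```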

    NUMERICALLY COMPUTABLE BOUNDS FOR THE RANGE OF VALUES OF INTERVAL POLYNOMIALS

    A central problem in interval analysis is the computation of the range of values of an interval polynomial over an interval. This problem has been treated by Dussel and Schmitt [1] and, disregarding the computational cost of their algorithm, solved in a satisfactory manner. In this paper we discuss two algorithms by Rivlin [4] (see also Cargo and Shisha [2]) in which the accuracy of the bounds depends on the amount of work one is willing to do. The first algorithm is based on expressing the polynomial in Bernstein polynomials. As given by Rivlin [4], it is valid for an estimate over the interval [0,1]; we generalize it to an arbitrary finite interval and show that it is an appropriate algorithm when the width of the interval is not too large. The second algorithm is based on the mean value theorem. As stated by Rivlin [4], it too is valid for the interval [0,1]; we generalize it so that it is valid for any finite interval. Both algorithms are then generalized to interval arithmetic versions. Finally, we compare the algorithms numerically on several polynomials.
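
    As a sketch of the Bernstein-form bound over a general interval [a, b] (a from-scratch illustration, not Rivlin's or the paper's exact algorithm): shift the polynomial to [0, 1], convert to Bernstein coefficients, and take their minimum and maximum as range bounds:

```python
# Bernstein-coefficient range bounds for a power-basis polynomial on [a, b].
from math import comb

def bernstein_bounds(coeffs, a, b):
    """coeffs: power-basis coefficients [c0, c1, ...] of p(t) = sum c_k t^k."""
    n = len(coeffs) - 1
    w = b - a
    # Re-expand p(a + w*s) in powers of s, so s ranges over [0, 1].
    shifted = [0.0] * (n + 1)
    for k, c in enumerate(coeffs):
        for j in range(k + 1):
            shifted[j] += c * comb(k, j) * (a ** (k - j)) * (w ** j)
    # Bernstein coefficients on [0, 1]: b_i = sum_{j<=i} C(i,j)/C(n,j) * shifted[j];
    # their min and max enclose the range of p on [a, b].
    bern = [sum(comb(i, j) / comb(n, j) * shifted[j] for j in range(i + 1))
            for i in range(n + 1)]
    return min(bern), max(bern)

# p(t) = t^2 - t on [0, 1]: true range is [-0.25, 0]; the bound gives [-0.5, 0].
print(bernstein_bounds([0.0, -1.0, 1.0], 0.0, 1.0))
```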